<?xml version="1.0" encoding="ISO-8859-1"?>
<metadatalist>
	<metadata ReferenceType="Conference Proceedings">
		<site>sibgrapi.sid.inpe.br 802</site>
		<holdercode>{ibi 8JMKD3MGPEW34M/46T9EHH}</holdercode>
		<identifier>8JMKD3MGPBW34M/3A32SLE</identifier>
		<repository>sid.inpe.br/sibgrapi/2011/07.06.23.53</repository>
		<lastupdate>2011:07.06.23.53.51 sid.inpe.br/banon/2001/03.30.15.38 administrator</lastupdate>
		<metadatarepository>sid.inpe.br/sibgrapi/2011/07.06.23.53.51</metadatarepository>
		<metadatalastupdate>2022:06.14.00.07.08 sid.inpe.br/banon/2001/03.30.15.38 administrator {D 2011}</metadatalastupdate>
		<doi>10.1109/SIBGRAPI.2011.21</doi>
		<citationkey>RauberBern:2011:KeMuPe</citationkey>
		<title>Kernel Multilayer Perceptron</title>
		<format>DVD, On-line.</format>
		<year>2011</year>
		<numberoffiles>1</numberoffiles>
		<size>154 KiB</size>
		<author>Rauber, Thomas W.,</author>
		<author>Berns, Karsten,</author>
		<affiliation>Departamento de Informática, Centro Tecnológico, Universidade Federal do Espírito Santo</affiliation>
		<affiliation>Robotics Research Lab, Department of Computer Science, University of Kaiserslautern, Gottlieb-Daimler-Strasse, 67663 Kaiserslautern, Germany</affiliation>
		<editor>Lewiner, Thomas,</editor>
		<editor>Torres, Ricardo,</editor>
		<e-mailaddress>thomas@inf.ufes.br</e-mailaddress>
		<conferencename>Conference on Graphics, Patterns and Images, 24 (SIBGRAPI)</conferencename>
		<conferencelocation>Maceió, AL, Brazil</conferencelocation>
		<date>28-31 Aug. 2011</date>
		<publisher>IEEE Computer Society</publisher>
		<publisheraddress>Los Alamitos</publisheraddress>
		<booktitle>Proceedings</booktitle>
		<tertiarytype>Full Paper</tertiarytype>
		<transferableflag>1</transferableflag>
		<versiontype>finaldraft</versiontype>
		<keywords>Multilayer Perceptron, kernel mapping.</keywords>
		<abstract>We enhance the Multilayer Perceptron to map a feature vector not only from the original d-dimensional feature space, but also from an intermediate implicit Hilbert feature space in which kernels compute inner products. The kernel replaces the usual inner product between the weight vectors and the input vector (or the feature vector of the hidden layer). The objective is to further boost the generalization capability of this universal function approximator. We present classification experiments on standard Machine Learning data sets. For the majority of the data sets, classification accuracy improves for certain kernel types and their intrinsic parameters.</abstract>
		<language>en</language>
		<targetfile>86589.pdf</targetfile>
		<usergroup>thomas@inf.ufes.br</usergroup>
		<visibility>shown</visibility>
		<mirrorrepository>sid.inpe.br/banon/2001/03.30.15.38.24</mirrorrepository>
		<nexthigherunit>8JMKD3MGPEW34M/46SKNPE</nexthigherunit>
		<nexthigherunit>8JMKD3MGPEW34M/4742MCS</nexthigherunit>
		<citingitemlist>sid.inpe.br/sibgrapi/2022/05.15.00.56 4</citingitemlist>
		<hostcollection>sid.inpe.br/banon/2001/03.30.15.38</hostcollection>
		<agreement>agreement.html .htaccess .htaccess2</agreement>
		<lasthostcollection>sid.inpe.br/banon/2001/03.30.15.38</lasthostcollection>
		<url>http://sibgrapi.sid.inpe.br/rep-/sid.inpe.br/sibgrapi/2011/07.06.23.53</url>
	</metadata>
</metadatalist>